initial draft of last mile latency test #13
Conversation
Nothing big to say, largely just some food for thought.
(And I'm sure these will evolve over time, regardless.)
```python
if str(params['verbose']).lower() in ['true', '1']:
    params['verbose'] = True
elif str(params['verbose']).lower() in ['false', '0']:
    params['verbose'] = False
else:
    exit_code = CONFIG_ERROR
```
That's cool, but can't we expect the `json` module to parse proper JSON Booleans (rather than strings), and expect the configuration to be proper? I.e., is this sufficient?
```diff
-if str(params['verbose']).lower() in ['true', '1']:
-    params['verbose'] = True
-elif str(params['verbose']).lower() in ['false', '0']:
-    params['verbose'] = False
-else:
-    exit_code = CONFIG_ERROR
+if not isinstance(params['verbose'], bool):
+    exit_code = CONFIG_ERROR
```
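To illustrate the point about the `json` module (a minimal check of my own, not code from the PR): `json.loads` already yields real Python `bool` objects for JSON `true`/`false`, so a plain `isinstance` check suffices, while a quoted `"true"` in the config correctly fails it:

```python
import json

# JSON booleans parse directly into Python bools; no string coercion needed.
params = json.loads('{"verbose": true, "attempts": 3}')
print(isinstance(params['verbose'], bool))  # True

# A quoted "true" in the config, by contrast, fails the isinstance check:
bad = json.loads('{"verbose": "true"}')
print(isinstance(bad['verbose'], bool))  # False
```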
```python
    return SUCCESS
```

```python
def stdin_parser():
```
This is just food for thought, but I have wondered if it wouldn't be useful to make use of proper Python exceptions so long as we're in Python, and leave the exit-code stuff to the boundary (in the `main` function). E.g.:
```python
PARAM_KEY_TYPES = (
    (int, ('attempts', 'timeout')),
    (bool, ('verbose',)),
)


def get_params(stdin=sys.stdin, defaults=PARAM_DEFAULTS, key_types=PARAM_KEY_TYPES):
    params = dict(defaults, **json.load(stdin))

    for (key_type, keys) in key_types:
        for key in keys:
            if not isinstance(params[key], key_type):
                raise TypeError(f'{key}: must be {key_type}: {keys}')

    return params


def main():
    try:
        params = get_params()
    except (ValueError, TypeError):
        json.dump({'stdin': {'retcode': CONFIG_ERROR, 'message': '…'}}, sys.stderr)
        sys.exit(CONFIG_ERROR)
```
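A runnable sketch of that pattern, with the defaults and the `CONFIG_ERROR` value filled in hypothetically here just to show the shape (they are not the PR's actual values):

```python
import io
import json

CONFIG_ERROR = 20  # hypothetical exit code, for illustration only
PARAM_DEFAULTS = {'attempts': 3, 'timeout': 5, 'verbose': False}
PARAM_KEY_TYPES = (
    (int, ('attempts', 'timeout')),
    (bool, ('verbose',)),
)

def get_params(stdin, defaults=PARAM_DEFAULTS, key_types=PARAM_KEY_TYPES):
    # Merge defaults with the JSON configuration read from the stream,
    # then validate types -- raising, rather than returning an exit code.
    params = dict(defaults, **json.load(stdin))
    for (key_type, keys) in key_types:
        for key in keys:
            if not isinstance(params[key], key_type):
                raise TypeError(f'{key}: must be {key_type}')
    return params

# Well-typed config: defaults merged with overrides
print(get_params(io.StringIO('{"verbose": true}')))

# Mis-typed config: raises TypeError, leaving exit-code handling to main()
try:
    get_params(io.StringIO('{"attempts": "3"}'))
except TypeError as exc:
    print('config error:', exc)
```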
```python
        except json.decoder.JSONDecodeError:
            continue
```

```python
    res['src'] = record['src']
```
Just to be clear, we expect only one line in the output to both be JSON and have `"type": "trace"`? (Otherwise, it appears that subsequent passes will overwrite the result key `"src"`.)

If that's the case, I think this could be made clearer: there's no need to define `res` outside of the loop if it's entirely a product of a particular `line`. Or, for that matter, the logic of finding the matching line can be separated from the logic of mapping it to a result. Say:
```python
for line in out:
    try:
        record = json.loads(line)
    except ValueError:
        continue

    if record.get('type') == 'trace':
        break
else:
    # no line matched!
    return None

res = {
    'src': record['src'],
    'dst': record['dst'],
    'attempts': record['attempts'],
}

…

return res
```
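The snippet above leans on Python's loop-`else`: the `else` branch runs only when the loop finishes without hitting `break`. A standalone illustration of that behavior (my own example, not code from the PR):

```python
import json

def find_trace(out):
    # Scan lines for the first valid-JSON record with "type": "trace";
    # the loop's else clause fires only if no line triggered break.
    for line in out:
        try:
            record = json.loads(line)
        except ValueError:
            continue
        if record.get('type') == 'trace':
            break
    else:
        return None  # no line matched
    return {'src': record['src'], 'dst': record['dst']}

out = [
    'not json at all',
    '{"type": "cycle"}',
    '{"type": "trace", "src": "192.0.2.1", "dst": "8.8.8.8"}',
]
print(find_trace(out))               # the trace record's src/dst
print(find_trace(['not json']))      # None: else branch taken
```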
```python
if "dig" not in stderr_res:
    stderr_res['dig'] = {}
stderr_res['dig'][params['target']] = stderr_dst
```
Too fancy perhaps, but you could say tersely:
```diff
-if "dig" not in stderr_res:
-    stderr_res['dig'] = {}
-stderr_res['dig'][params['target']] = stderr_dst
+stderr_res.setdefault('dig', {})[params['target']] = stderr_dst
```
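For reference, `dict.setdefault` returns the existing value when the key is already present, and otherwise inserts and returns the default, so the one-liner is equivalent to the check-then-assign version. A quick demonstration (the target/value strings below are made up):

```python
stderr_res = {}

# First call inserts the empty dict under 'dig' and returns it for indexing...
stderr_res.setdefault('dig', {})['8.8.8.8'] = 'timeout'

# ...subsequent calls return the existing dict rather than replacing it.
stderr_res.setdefault('dig', {})['1.1.1.1'] = 'refused'

print(stderr_res)  # {'dig': {'8.8.8.8': 'timeout', '1.1.1.1': 'refused'}}
```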
Co-authored-by: Jesse London <[email protected]>
Cool, thanks for the ongoing changes. Let me know if you intend to work more on this: you should feel free to do so, but if not, I'm happy to get it incorporated into the rest.
Adds both the task `lml` (executable `netrics-lml`) and the task alias `lml-scamper` (`netrics-lml-scamper`) for clarity. This version of the measurement is added to the default configuration, but commented out, such that it is not enabled by default. For speed and niceness, this lml test randomly selects an endpoint to trace from a default set comprising Google DNS and CloudFlare DNS. (The former is generally speedier to respond; regardless, it might be "nice" to round-robin, and for that matter to fall back in the event of response failure.)

Otherwise: measurements' schedules in the default configuration now use hashed cron expressions to *discourage* their collision (though any which *may not* run concurrently will be configured as such).

completes #13
part of #3
This work has been merged! 🎆 …Via #32.
Ported the last mile latency test. It is very similar to `netrics-traceroute`, using scamper to run the traceroute while also allowing arbitrary destinations. If the destination is given as a hostname, it is resolved first, because scamper only takes IP addresses as input.